A Randomized Mirror-Prox Method for Solving Structured Large-Scale Matrix Saddle-Point Problems
Authors
Abstract
In this paper, we derive a randomized version of the Mirror-Prox method for solving structured matrix saddle-point problems, such as the maximal eigenvalue minimization problem. Deterministic first-order schemes, such as Nesterov's Smoothing Technique or the standard Mirror-Prox method, require the exact computation of a matrix exponential at every iteration, which limits the size of the problems they can solve. Our method allows us to use stochastic approximations of matrix exponentials instead. We prove that our randomized scheme significantly reduces the complexity of its deterministic counterpart for large-scale matrix saddle-point problems. Numerical experiments illustrate and confirm our theoretical results.
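As a rough illustration of the ingredient the abstract refers to, the Python sketch below contrasts an exact matrix exponential (scipy.linalg.expm, cubic in the matrix dimension) with a cheap Monte-Carlo estimate of exp(A) built only from matrix-vector products, of the kind a randomized prox step over the spectahedron could use. This is a minimal sketch under our own assumptions: the helper names expm_action and randomized_matrix_exp, the Gaussian-probe estimator, and all parameter values are illustrative and are not the estimator analyzed in the paper.

import numpy as np
from scipy.linalg import expm  # exact reference; costs O(n^3)

def expm_action(A, v, degree=30):
    # Approximate exp(A) @ v with a truncated Taylor series.
    # Only matrix-vector products are needed, so sparse or structured A stays cheap.
    term, result = v.copy(), v.copy()
    for k in range(1, degree + 1):
        term = (A @ term) / k
        result = result + term
    return result

def randomized_matrix_exp(A, num_probes=500, degree=30, seed=None):
    # Monte-Carlo estimate of exp(A) for symmetric A: since E[z z^T] = I for
    # z ~ N(0, I), averaging (exp(A/2) z)(exp(A/2) z)^T over probes is an
    # unbiased estimator of exp(A), up to the Taylor truncation in expm_action.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    estimate = np.zeros((n, n))
    for _ in range(num_probes):
        z = rng.standard_normal(n)
        w = expm_action(0.5 * A, z, degree)
        estimate += np.outer(w, w)
    return estimate / num_probes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 100
    B = rng.standard_normal((n, n))
    A = (B + B.T) / np.sqrt(n)  # symmetric test matrix
    exact = expm(A)
    approx = randomized_matrix_exp(A, num_probes=500, seed=1)
    # Compare the trace-normalized (Gibbs-type) matrices a prox step would use.
    gibbs_exact = exact / np.trace(exact)
    gibbs_approx = approx / np.trace(approx)
    err = np.linalg.norm(gibbs_exact - gibbs_approx) / np.linalg.norm(gibbs_exact)
    print(f"relative Frobenius error of the randomized estimate: {err:.3f}")

Because the estimator touches A only through matrix-vector products, its per-probe cost scales with the number of nonzeros of A rather than cubically with its dimension, which is the kind of cost saving the abstract attributes to the randomized scheme.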
Similar references
Large-scale semidefinite programming via a saddle point Mirror-Prox algorithm
In this paper, we first demonstrate that positive semidefiniteness of a large well-structured sparse symmetric matrix can be represented via positive semidefiniteness of a bunch of smaller matrices linked, in a linear fashion, to the matrix. We derive also the “dual counterpart” of the outlined representation, which expresses the possibility of positive semidefinite completion of a well-structu...
Mirror Prox algorithm for multi-term composite minimization and semi-separable problems
In the paper, we develop a composite version of Mirror Prox algorithm for solving convexconcave saddle point problems and monotone variational inequalities of special structure, allowing to cover saddle point/variational analogies of what is usually called “composite minimization” (minimizing a sum of an easy-to-handle nonsmooth and a general-type smooth convex functions “as if” there were no n...
Mirror Prox Algorithm for Multi-Term Composite Minimization and Alternating Directions
In the paper, we develop a composite version of Mirror Prox algorithm for solving convex-concave saddle point problems and monotone variational inequalities of special structure, allowing to cover saddle point/variational analogies of what is usually called “composite minimization” (minimizing a sum of an easy-to-handle nonsmooth and a general-type smooth convex functions “as if” there were no ...
On Solving Large-Scale Polynomial Convex Problems by Randomized First-Order Algorithms
One of the most attractive recent approaches to processing well-structured large-scale convex optimization problems is based on smooth convex-concave saddle point reformulation of the problem of interest and solving the resulting problem by a fast first order saddle point method utilizing smoothness of the saddle point cost function. In this paper, we demonstrate that when the saddle point cost...
Journal title:
SIAM Journal on Optimization
Volume 23, Issue -
Pages: -
Publication date: 2013